Data Compaction of Neural Networks by Error Diffusion Type Quantization

Authors
Abstract


Similar references

Modeling loss data by phase-type distribution

Insurers have always been concerned about the losses arising from the policies they cover, and they look for methods of modeling past loss data with the aim of making an optimal decision. In this study, phase-type distributions are introduced for modeling loss data, including the associated statistical inference and the use of the EM algorithm for estimating the distribution parameters. Finally, the possibility of using this distribution for modeling grouped data ...
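As a rough illustration of the EM step mentioned in this abstract, the sketch below fits a hyperexponential distribution (a mixture of exponentials, one of the simplest phase-type distributions) to loss amounts. It is an assumption for illustration only; the function name fit_hyperexponential and the two-phase toy data are hypothetical and not taken from the paper.

import numpy as np

def fit_hyperexponential(x, n_phases=2, n_iter=200, seed=0):
    # EM for a mixture of exponentials, a simple phase-type distribution.
    # x: 1-D array of positive loss amounts; returns mixing weights and rates.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    w = np.full(n_phases, 1.0 / n_phases)               # initial mixing weights
    lam = rng.uniform(0.5, 2.0, n_phases) / x.mean()    # initial rates

    for _ in range(n_iter):
        # E-step: responsibility of each phase for each observation
        dens = w * lam * np.exp(-np.outer(x, lam))      # shape (n, n_phases)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights and rates
        nk = resp.sum(axis=0)
        w = nk / len(x)
        lam = nk / (resp * x[:, None]).sum(axis=0)
    return w, lam

losses = np.concatenate([np.random.exponential(1.0, 800),
                         np.random.exponential(10.0, 200)])
print(fit_hyperexponential(losses))

Fitting a general phase-type distribution requires a more involved EM over an underlying Markov jump process; the mixture above only covers the special case with no transitions between phases.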

Robust stability of fuzzy Markov type Cohen-Grossberg neural networks by delay decomposition approach

In this paper, we investigate the delay-dependent robust stability of fuzzy Cohen-Grossberg neural networks with Markovian jumping parameters and mixed time-varying delays by a delay decomposition method. A new Lyapunov-Krasovskii functional (LKF) is constructed by nonuniformly dividing the discrete delay interval into multiple subintervals, and choosing proper functionals with different weighting matr...


Identification of Crack Location and Depth in a Structure by GMDH-type Neural Networks and ANFIS

The existence of a crack in a structure leads to local flexibility and changes the stiffness and dynamic behavior of the structure. The dynamic behavior of the cracked structure depends on the depth and the location of the crack. Hence, the changes in the dynamic behavior of the structure due to the crack can be used for identifying the location and depth of the crack. In this study the first th...


Training Recurrent Neural Networks by Diffusion

This work presents a new algorithm for training recurrent neural networks (although the ideas are applicable to feedforward networks as well). The algorithm is derived from a theory in nonconvex optimization related to the diffusion equation. The contributions made in this work are twofold. First, we show how some seemingly disconnected mechanisms used in deep learning such as smart initialization...
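The following sketch illustrates, under stated assumptions, the smoothing idea behind this abstract rather than the paper's exact algorithm: convolving the objective with a Gaussian is equivalent to evolving it under the diffusion (heat) equation, and optimization starts on a heavily smoothed surface whose smoothing is then annealed away. The toy objective, its gradient, and the Monte-Carlo estimator are hypothetical stand-ins.

import numpy as np

def loss(w):
    # toy nonconvex objective standing in for a network's training loss
    return np.sum(np.sin(5.0 * w)) + 0.1 * np.sum(w ** 2)

def grad_loss(w):
    return 5.0 * np.cos(5.0 * w) + 0.2 * w

def smoothed_grad(w, sigma, n_samples=256):
    # Monte-Carlo estimate of the gradient of the Gaussian-smoothed loss:
    # E_z[ grad_loss(w + sigma * z) ] with z ~ N(0, I).
    if sigma == 0.0:
        return grad_loss(w)
    z = np.random.randn(n_samples, w.size) * sigma
    return grad_loss(w[None, :] + z).mean(axis=0)

# Graduated optimization: start at a large diffusion time, then anneal sigma.
w = np.random.randn(4)
for sigma in (2.0, 1.0, 0.5, 0.1, 0.0):
    for _ in range(300):
        w -= 0.02 * smoothed_grad(w, sigma)
print("final parameters:", w, "final loss:", loss(w))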


Resiliency of Deep Neural Networks under Quantization

The complexity of deep neural network algorithms for hardware implementation can be greatly reduced by optimizing the word length of weights and signals. Direct quantization of floating-point weights, however, does not show good performance when the number of bits assigned is small. Retraining of quantized networks has been developed to relieve this problem. In this work, the effects of quantizati...
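To make the word-length trade-off described above concrete, here is a minimal sketch, assuming a symmetric uniform quantizer, of directly quantizing floating-point weights to a given bit width; the helper quantize_weights and the random stand-in weights are hypothetical and not taken from the paper.

import numpy as np

def quantize_weights(w, bits):
    # Direct symmetric uniform quantization of a weight array to `bits` bits.
    levels = 2 ** (bits - 1) - 1        # e.g. 127 positive levels for 8 bits
    step = np.max(np.abs(w)) / levels   # step size derived from the weight range
    return np.clip(np.round(w / step), -levels, levels) * step

w = np.random.randn(1000) * 0.1         # stand-in for a trained layer's weights
for bits in (8, 4, 2):
    err = np.mean((w - quantize_weights(w, bits)) ** 2)
    print(f"{bits}-bit direct quantization, mean squared error: {err:.2e}")

Retraining a quantized network, as the abstract suggests, would apply such a quantizer in the forward pass while keeping full-precision weights for the gradient updates, so the network can adapt to the small word length.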



Journal

Journal title: Transactions of the Institute of Systems, Control and Information Engineers

Year: 2019

ISSN: 1342-5668, 2185-811X

DOI: 10.5687/iscie.32.133